713 research outputs found
Integrated membrane systems for toxic cyanobacteria removal
Blue-green algae (cyanobacteria) pose a major problem for the water industry because they can produce metabolites toxic to humans, as well as taste and odour (T&O) compounds that make drinking water aesthetically displeasing. This problem is likely to be intensified by climate change through reservoir warming. Microfiltration (MF) offers a potential solution for dealing with blooms of toxic blue-green algae. In the past, coagulation and sand filtration have been used successfully to remove cyanobacteria; however, as membrane technology has become more economically viable, demand for information on the application of membranes to cyanobacterial metabolite removal has increased. MF pore size is a key consideration, as cyanobacterial metabolites should permeate such membranes. However, if the metabolites remain within the algal cells, MF can be effective through removal of the intact cells. This study investigated an integrated membrane system (IMS) incorporating coagulation, powdered activated carbon and MF for the removal of intracellular and extracellular cyanobacterial metabolites. A laboratory-scale MF unit was designed and studied. It utilised PVDF fibres with a nominal 0.02 micron pore size. Three species of blue-green algae were tested, and three different coagulants were used on each species for removal of intact cells. Powdered activated carbon (PAC) was dosed prior to the MF at 20 mg/L to remove extracellular metabolites. Cell counts, as well as analysis for total and extracellular toxin and T&O, were undertaken to assess each stage of the IMS. The results of this study are promising.
Mike Dixon, Brian O'Neill, Yann Richard, Lionel Ho, Chris Chow and Gayle Newcombe
http://www.chemeca2010.com/abstract/460.as
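As a back-of-the-envelope illustration of how each IMS stage might be assessed from the measurements described above, the sketch below computes percent removal of an analyte (cell counts, total or extracellular toxin, T&O). The stage names and concentration values are hypothetical examples, not data from the study.

```python
def percent_removal(feed: float, permeate: float) -> float:
    """Removal efficiency in percent; feed and permeate must share units."""
    if feed <= 0:
        raise ValueError("feed concentration must be positive")
    return 100.0 * (1.0 - permeate / feed)

# Illustrative (not measured) values, e.g. a toxin concentration in ug/L
# before and after each stage of coagulation + PAC (20 mg/L) + MF.
stages = {"coagulation": (10.0, 4.0), "PAC": (4.0, 0.8), "MF": (0.8, 0.1)}
for stage, (feed, permeate) in stages.items():
    print(f"{stage}: {percent_removal(feed, permeate):.1f}% removal")
```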
EgoHumans: An Egocentric 3D Multi-Human Benchmark
We present EgoHumans, a new multi-view multi-human video benchmark to advance
the state-of-the-art of egocentric human 3D pose estimation and tracking.
Existing egocentric benchmarks capture either single-subject or indoor-only
scenarios, which limits the generalization of computer vision algorithms for
real-world applications. We propose a novel 3D capture setup to construct a
comprehensive egocentric multi-human benchmark in the wild with annotations to
support diverse tasks such as human detection, tracking, 2D/3D pose estimation,
and mesh recovery. We leverage consumer-grade wearable camera-equipped glasses
for the egocentric view, which enables us to capture dynamic activities like
playing tennis, fencing, volleyball, etc. Furthermore, our multi-view setup
generates accurate 3D ground truth even under severe or complete occlusion. The
dataset consists of more than 125k egocentric images, spanning diverse scenes
with a particular focus on challenging and unchoreographed multi-human
activities and fast-moving egocentric views. We rigorously evaluate existing
state-of-the-art methods and highlight their limitations in the egocentric
scenario, specifically on multi-human tracking. To address such limitations, we
propose EgoFormer, a novel approach with a multi-stream transformer
architecture and explicit 3D spatial reasoning to estimate and track the human
pose. EgoFormer significantly outperforms prior art by 13.6% IDF1 on the
EgoHumans dataset.
Comment: Accepted to ICCV 2023 (Oral)
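For context on the metric reported above, IDF1 is the F1 score computed over identity-matched detections in multi-object tracking, where IDTP, IDFP and IDFN are the identity true positives, false positives and false negatives of the optimal trajectory matching. A minimal sketch of the final ratio:

```python
def idf1(idtp: int, idfp: int, idfn: int) -> float:
    """IDF1 = 2*IDTP / (2*IDTP + IDFP + IDFN): F1 over identity-matched
    detections; higher means more consistent identity assignment."""
    denom = 2 * idtp + idfp + idfn
    return 2 * idtp / denom if denom else 0.0
```

Counting IDTP/IDFP/IDFN itself requires a bipartite matching between ground-truth and predicted trajectories; only the final score is shown here.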
Aria Digital Twin: A New Benchmark Dataset for Egocentric 3D Machine Perception
We introduce the Aria Digital Twin (ADT) - an egocentric dataset captured
using Aria glasses with extensive object, environment, and human level ground
truth. This ADT release contains 200 sequences of real-world activities
conducted by Aria wearers in two real indoor scenes with 398 object instances
(324 stationary and 74 dynamic). Each sequence consists of: a) raw data of two
monochrome camera streams, one RGB camera stream, two IMU streams; b) complete
sensor calibration; c) ground truth data including continuous
6-degree-of-freedom (6DoF) poses of the Aria devices, object 6DoF poses, 3D eye
gaze vectors, 3D human poses, 2D image segmentations, image depth maps; and d)
photo-realistic synthetic renderings. To the best of our knowledge, there is no
existing egocentric dataset with a level of accuracy, photo-realism and
comprehensiveness comparable to ADT. By contributing ADT to the research
community, our mission is to set a new standard for evaluation in the
egocentric machine perception domain, which includes very challenging research
problems such as 3D object detection and tracking, scene reconstruction and
understanding, sim-to-real learning, human pose prediction - while also
inspiring new machine perception tasks for augmented reality (AR) applications.
To kick start exploration of the ADT research use cases, we evaluated several
existing state-of-the-art methods for object detection, segmentation and image
translation tasks that demonstrate the usefulness of ADT as a benchmarking
dataset.
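Since the ADT ground truth above is expressed as continuous 6DoF poses, a brief sketch of the standard representation may help: a 6DoF pose is a rigid transform (3D rotation plus 3D translation), commonly stored as a 4x4 homogeneous matrix. The sketch below is a generic illustration (rotation about one axis only, for brevity), not ADT's actual file format.

```python
import math

def pose_matrix(yaw: float, tx: float, ty: float, tz: float):
    """A simplified 6DoF pose (only yaw rotation shown) as a 4x4
    homogeneous transform: rotation block plus translation column."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c,  -s,  0.0, tx],
            [s,   c,  0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def apply(T, p):
    """Map a 3D point through the pose via homogeneous coordinates."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[i][j] * v[j] for j in range(4)) for i in range(3))
```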
The Lantern Vol. 28, No. 1, January 1961
• I Felt Horror That Day • John Ten • Term Paper: Circa 3032 A.D. • Villanelle • Lament • Joy of Bearded Boy • U.S. Foreign Policy: The Future • Contrast • Camp Crowder • Whispered Sounds • Pity, Love • Not Quite Free • Experiences of a Heroin Addict • The Hawk • The Second Apple • Reaction • Poor Family, Moving • Torch Ends Sputter in the Pall • Late Date • She'll Call Me
https://digitalcommons.ursinus.edu/lantern/1079/thumbnail.jp
FroDO: From Detections to 3D Objects
Object-oriented maps are important for scene understanding since they jointly
capture geometry and semantics, allowing individual instantiation of, and
meaningful reasoning about, objects. We introduce FroDO, a method for accurate 3D
reconstruction of object instances from RGB video that infers object location,
pose and shape in a coarse-to-fine manner. Key to FroDO is to embed object
shapes in a novel learnt space that allows seamless switching between sparse
point cloud and dense DeepSDF decoding. Given an input sequence of localized
RGB frames, FroDO first aggregates 2D detections to instantiate a
category-aware 3D bounding box per object. A shape code is regressed using an
encoder network before optimizing shape and pose further under the learnt shape
priors using sparse and dense shape representations. The optimization uses
multi-view geometric, photometric and silhouette losses. We evaluate on
real-world datasets, including Pix3D, Redwood-OS, and ScanNet, for single-view,
multi-view, and multi-object reconstruction.
Comment: To be published in CVPR 2020. The first two authors contributed equally.
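The refinement step described above can be pictured as gradient descent on a latent shape code under a weighted sum of losses. The sketch below is purely schematic: toy quadratic terms stand in for the paper's multi-view geometric, photometric and silhouette losses, and the variable names are illustrative, not FroDO's implementation.

```python
def total_loss(z: float, targets, weights) -> float:
    """Weighted sum of quadratic surrogate losses on the shape code z."""
    return sum(w * (z - t) ** 2 for t, w in zip(targets, weights))

def refine(z: float, targets, weights, lr: float = 0.1, steps: int = 100) -> float:
    """Gradient descent on z; each target/weight pair stands in for one
    loss term (geometric, photometric, silhouette)."""
    for _ in range(steps):
        # analytic gradient of the quadratic surrogate
        grad = sum(2 * w * (z - t) for t, w in zip(targets, weights))
        z -= lr * grad
    return z

# The optimum is the weighted mean of the per-loss targets.
z = refine(0.0, targets=(1.0, 1.2, 0.9), weights=(1.0, 0.5, 0.5))
```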